In numerical analysis, the Runge–Kutta methods (German pronunciation: [ˌʁʊŋəˈkʊta]) are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M.W. Kutta.
See the article on numerical ordinary differential equations for more background and other methods. See also List of Runge–Kutta methods.
One member of the family of Runge–Kutta methods is so commonly used that it is often referred to as "RK4", the "classical Runge–Kutta method" or simply as "the Runge–Kutta method".
Let an initial value problem be specified as follows:

$$\dot{y} = f(t, y), \qquad y(t_0) = y_0.$$
In words, what this means is that the rate at which $y$ changes is a function of $y$ and of $t$ (time). At the start, time is $t_0$ and $y$ is $y_0$.
The RK4 method for this problem is given by the following equations:

$$y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right),$$
$$t_{n+1} = t_n + h,$$
where $y_{n+1}$ is the RK4 approximation of $y(t_{n+1})$, and

$$k_1 = f(t_n, y_n),$$
$$k_2 = f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}k_1\right),$$
$$k_3 = f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}k_2\right),$$
$$k_4 = f\!\left(t_n + h,\; y_n + h k_3\right).$$
Thus, the next value ($y_{n+1}$) is determined by the present value ($y_n$) plus the weighted average of four deltas, where each delta is the product of the size of the interval ($h$) and an estimated slope given by the function $f$:

- $k_1$ is the slope at the beginning of the interval;
- $k_2$ is the slope at the midpoint of the interval, using slope $k_1$ to determine the value of $y$ at the point $t_n + h/2$;
- $k_3$ is again the slope at the midpoint, but now using slope $k_2$ to determine the $y$-value;
- $k_4$ is the slope at the end of the interval, with its $y$-value determined using $k_3$.
In averaging the four deltas, greater weight is given to the deltas at the midpoint:

$$\text{estimated slope} = \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}.$$
The RK4 method is a fourth-order method, meaning that the error per step is on the order of $h^5$, while the total accumulated error has order $h^4$.
Note that the above formulae are valid for both scalar- and vector-valued functions (i.e., $y$ can be a vector and $f$ an operator). For example, one can integrate the time-independent Schrödinger equation using the Hamiltonian operator as the function $f$.
Also note that if $f$ is independent of $y$, so that the differential equation is equivalent to a simple integral, then RK4 reduces to Simpson's rule.
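As a concrete illustration, the following is a minimal Python sketch of one RK4 step; the function name `rk4_step` and the test problem are illustrative choices, not part of the original text.

```python
def rk4_step(f, t, y, h):
    """Advance y(t) by one step of size h using the classical RK4 method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: dy/dt = y with y(0) = 1, integrated to t = 1 (exact answer e ≈ 2.71828).
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # ≈ 2.71828, close to exp(1)
```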
The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i,$$
where

$$k_1 = f(t_n, y_n),$$
$$k_2 = f(t_n + c_2 h,\; y_n + h\, a_{21} k_1),$$
$$k_3 = f(t_n + c_3 h,\; y_n + h\,(a_{31} k_1 + a_{32} k_2)),$$
$$\vdots$$
$$k_s = f\!\left(t_n + c_s h,\; y_n + h \sum_{j=1}^{s-1} a_{sj} k_j\right).$$
To specify a particular method, one needs to provide the integer $s$ (the number of stages), and the coefficients $a_{ij}$ (for 1 ≤ j < i ≤ s), $b_i$ (for i = 1, 2, ..., s) and $c_i$ (for i = 2, 3, ..., s). These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):
$$\begin{array}{c|ccccc}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s
\end{array}$$
The Runge–Kutta method is consistent if

$$\sum_{j=1}^{i-1} a_{ij} = c_i \quad \text{for } i = 2, \ldots, s.$$
There are also accompanying requirements if we require the method to have a certain order $p$, meaning that the local truncation error is $O(h^{p+1})$. These can be derived from the definition of the truncation error itself. For example, a 2-stage method has order 2 if $b_1 + b_2 = 1$, $b_2 c_2 = 1/2$, and $b_2 a_{21} = 1/2$.
The RK4 method falls in this framework. Its tableau is:
$$\begin{array}{c|cccc}
0 & & & & \\
1/2 & 1/2 & & & \\
1/2 & 0 & 1/2 & & \\
1 & 0 & 0 & 1 & \\
\hline
 & 1/6 & 1/3 & 1/3 & 1/6
\end{array}$$
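The general framework above translates directly into code. The sketch below (with the hypothetical helper name `explicit_rk_step`) performs one step of an arbitrary explicit method given its Butcher tableau; with the RK4 coefficients it reproduces the RK4 step described earlier.

```python
def explicit_rk_step(f, t, y, h, a, b, c):
    """One explicit Runge-Kutta step defined by tableau coefficients a, b, c."""
    s = len(b)
    k = []
    for i in range(s):
        # Each stage uses only previously computed k's (strictly lower-triangular a).
        yi = y + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# RK4 tableau from above (c1 = 0 included so every stage is indexed uniformly).
a = [[0, 0, 0, 0],
     [1/2, 0, 0, 0],
     [0, 1/2, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

y_next = explicit_rk_step(lambda t, y: y, 0.0, 1.0, 0.1, a, b, c)  # one RK4 step for dy/dt = y
```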
However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula

$$y_{n+1} = y_n + h f(t_n, y_n).$$

This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is:
$$\begin{array}{c|c}
0 & \\
\hline
 & 1
\end{array}$$
An example of a second-order method with two stages is provided by the midpoint method:

$$y_{n+1} = y_n + h f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} f(t_n, y_n)\right).$$
The corresponding tableau is:
$$\begin{array}{c|cc}
0 & & \\
1/2 & 1/2 & \\
\hline
 & 0 & 1
\end{array}$$
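As a quick check, this tableau satisfies the order-2 conditions stated above ($b_1 + b_2 = 1$, $b_2 c_2 = 1/2$, $b_2 a_{21} = 1/2$):

$$b_1 + b_2 = 0 + 1 = 1, \qquad b_2 c_2 = 1 \cdot \tfrac{1}{2} = \tfrac{1}{2}, \qquad b_2 a_{21} = 1 \cdot \tfrac{1}{2} = \tfrac{1}{2}.$$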
Note that this 'midpoint' method is not the optimal RK2 method. An alternative is provided by Heun's method, where the 1/2's in the tableau above are replaced by 1's and the b's row is [1/2, 1/2].
In fact, a family of RK2 methods is

$$y_{n+1} = y_n + h \left( \left(1 - \frac{1}{2\alpha}\right) f(t_n, y_n) + \frac{1}{2\alpha}\, f\!\left(t_n + \alpha h,\; y_n + \alpha h f(t_n, y_n)\right) \right),$$

where $\alpha = \tfrac{1}{2}$ gives the midpoint method and $\alpha = 1$ gives Heun's method.
If one wants to minimize the truncation error, the method below should be used (Atkinson p. 423). Other important methods are the Fehlberg, Cash–Karp and Dormand–Prince methods. Using unequally spaced intervals requires an adaptive step-size method.
The following is an example usage of a two-stage explicit Runge–Kutta method:
$$\begin{array}{c|cc}
0 & & \\
2/3 & 2/3 & \\
\hline
 & 1/4 & 3/4
\end{array}$$
to solve the initial-value problem
with step size h=0.025.
The tableau above yields the equivalent equations defining the method:

$$k_1 = f(t_n, y_n),$$
$$k_2 = f\!\left(t_n + \tfrac{2}{3}h,\; y_n + \tfrac{2}{3} h\, k_1\right),$$
$$y_{n+1} = y_n + h\left(\tfrac{1}{4} k_1 + \tfrac{3}{4} k_2\right).$$
The numerical solutions correspond to the underlined values. Note that $f(t_n, y_n)$ is calculated once per step so that it need not be recomputed when forming the $k_i$.
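For illustration, here is a short Python sketch of this two-stage method; the right-hand side and initial data are stand-ins (the specific initial-value problem of the original example is not reproduced above), and the helper name `rk2_step` is hypothetical.

```python
import math

def rk2_step(f, t, y, h):
    """One step of the two-stage method with c2 = 2/3 and b = (1/4, 3/4)."""
    k1 = f(t, y)                            # f(tn, yn), computed once and reused
    k2 = f(t + 2/3 * h, y + 2/3 * h * k1)
    return y + h * (1/4 * k1 + 3/4 * k2)

# Stand-in problem: dy/dt = y, y(0) = 1, using the step size h = 0.025 from the text.
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.025
for _ in range(4):
    y = rk2_step(f, t, y, h)
    t += h
print(t, y, math.exp(t))  # numerical result vs. exact solution e^t
```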
The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods in the tableau, one with order $p$ and one with order $p-1$.
The lower-order step is given by

$$y^*_{n+1} = y_n + h \sum_{i=1}^{s} b^*_i k_i,$$
where the $k_i$ are the same as for the higher-order method. Then the error is

$$e_{n+1} = y_{n+1} - y^*_{n+1},$$
which is $O(h^{p})$. The Butcher tableau for this kind of method is extended to give the values of $b^*_i$:
$$\begin{array}{c|ccccc}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s \\
 & b^*_1 & b^*_2 & \cdots & b^*_{s-1} & b^*_s
\end{array}$$
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher Tableau is:
$$\begin{array}{c|cccccc}
0 & & & & & & \\
1/4 & 1/4 & & & & & \\
3/8 & 3/32 & 9/32 & & & & \\
12/13 & 1932/2197 & -7200/2197 & 7296/2197 & & & \\
1 & 439/216 & -8 & 3680/513 & -845/4104 & & \\
1/2 & -8/27 & 2 & -3544/2565 & 1859/4104 & -11/40 & \\
\hline
 & 16/135 & 0 & 6656/12825 & 28561/56430 & -9/50 & 2/55 \\
 & 25/216 & 0 & 1408/2565 & 2197/4104 & -1/5 & 0
\end{array}$$
However, the simplest adaptive Runge–Kutta method involves combining the Heun method, which is order 2, with the Euler method, which is order 1. Its extended Butcher Tableau is:
$$\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
 & 1/2 & 1/2 \\
 & 1 & 0
\end{array}$$
The error estimate is used to control the stepsize.
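As a sketch of how this estimate can drive the step size, the following Python fragment takes one step with the Heun–Euler pair above and proposes a new step size. The accept/reject rule and the controller constants (safety factor 0.9, exponent 1/2) are common conventions assumed here, not prescribed by the text.

```python
def heun_euler_adaptive_step(f, t, y, h, tol):
    """One embedded Heun(order 2)/Euler(order 1) step with a proposed new step size."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2          # order-2 (Heun) solution
    y_low = y + h * k1                      # order-1 (Euler) solution
    err = abs(y_high - y_low)               # local error estimate
    # Shrink or grow the step toward the error target; guard against err == 0.
    h_new = 0.9 * h * (tol / err) ** 0.5 if err > 0 else 2 * h
    if err <= tol:
        return t + h, y_high, h_new         # accept the step
    return t, y, h_new                      # reject and retry with a smaller step
```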
Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).
The implicit methods are more general than the explicit ones. The distinction shows up in the Butcher tableau: for an implicit method, the coefficient matrix $(a_{ij})$ is not necessarily lower triangular:

$$\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\
c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}$$
The approximate solution to the initial value problem reflects the greater number of coefficients:

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i,$$
$$k_i = f\!\left(t_n + c_i h,\; y_n + h \sum_{j=1}^{s} a_{ij} k_j\right), \qquad i = 1, \ldots, s.$$
Due to the fullness of the matrix $(a_{ij})$, the evaluation of each $k_i$ is now considerably more involved and dependent on the specific function $f(t, y)$. Despite the difficulties, implicit methods are of great importance because of their high (possibly unconditional) stability, which is especially important in the solution of partial differential equations. The simplest example of an implicit Runge–Kutta method is the backward Euler method:

$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$
The Butcher tableau for this is simply:

$$\begin{array}{c|c}
1 & 1 \\
\hline
 & 1
\end{array}$$
It can be difficult to make sense of even this simple implicit method, as seen from the expression for $k_1$:

$$k_1 = f(t_n + h,\; y_n + h k_1).$$
In this case, the awkward expression above can be simplified by noting that

$$y_{n+1} = y_n + h k_1,$$
so that

$$k_1 = \frac{y_{n+1} - y_n}{h},$$
from which

$$y_{n+1} = y_n + h f(t_n + h,\; y_{n+1})$$
follows. Though simpler than the "raw" representation before manipulation, this is an implicit relation, so the actual solution is problem dependent. Multistep implicit methods have been used with success by some researchers. The combination of stability, higher-order accuracy with fewer steps, and stepping that depends only on the previous value makes them attractive; however, the complicated problem-specific implementation and the fact that $y_{n+1}$ must often be approximated iteratively mean that they are not common.
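A small Python sketch of one backward Euler step follows, resolving the implicit relation above by fixed-point iteration; the tolerance, iteration limit, and test problem are illustrative assumptions, and in practice a Newton-type iteration is often preferred.

```python
def backward_euler_step(f, t, y, h, tol=1e-10, max_iter=100):
    """Solve y_next = y + h*f(t + h, y_next) by fixed-point iteration.

    Plain fixed-point iteration converges only when h*|df/dy| < 1; stiff
    problems normally call for a Newton-type solver instead.
    """
    y_next = y + h * f(t, y)                # forward Euler predictor as starting guess
    for _ in range(max_iter):
        y_new = y + h * f(t + h, y_next)
        if abs(y_new - y_next) < tol:
            break
        y_next = y_new
    return y_next

# Example: dy/dt = -5*y, y(0) = 1; one backward Euler step equals y0 / (1 + 5h).
t, y, h = 0.0, 1.0, 0.1
y1 = backward_euler_step(lambda t, y: -5.0 * y, t, y, h)
print(y1, 1.0 / (1.0 + 5.0 * h))            # both ≈ 0.6667
```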
Another example of an implicit Runge–Kutta method is the Crank–Nicolson method, also known as the trapezoid method. Its Butcher tableau is:
$$\begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1/2 & 1/2 \\
\hline
 & 1/2 & 1/2
\end{array}$$
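Reading the coefficients off this tableau in the general implicit formula above gives the familiar trapezoidal update (shown here as a check of the tableau):

$$y_{n+1} = y_n + \tfrac{h}{2}\left( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right).$$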